There are some significant discomforts in Hoare's suggestion of recursive
data structures and a corresponding programming language.
The use of typing is good; an early and clean advocacy of it
was Cheatham's in ELF (clean ≡ no damn coercion). What is disturbing in
Hoare's proposal is the redundancy. If type checking is to mean anything,
then the types of actual parameters are checked. That seems obvious.
But that checking entails looking at the structure of the arguments
(assuming they are not primitive data types); yet all that information is
thrown away, and a pattern matcher is then used in the body of the function
to extract it all over again!
He says this is better than car-cdr chains. Perhaps, but I claim that even
in untyped LISP deep car-cdr chains shouldn't appear if the functions are
written cleanly. Deep car-cdr chains often correspond to looking into the
substructure of a data type. This behavior should be discouraged.
Typed LISP, indeed typed anything, should have STRUCTURES as data types, i.e.
sequences of name-type pairs. The functions operating on these objects
can manipulate the (named) components. If sub-components are to be examined,
call a subfunction whose typed parameter corresponds to the type of the
subcomponent. It seems to me that if structured programming means anything,
it means defining data structures in levels and allowing operations only within
those levels; crossing levels within a function is to be frowned upon.
This way car-cdr chains (or their equivalent) go away, for one of two reasons:
1. They correspond to crossing levels of definitions, which should be discouraged.
2. They are an attempt to define a complex structure, in which case
typed LISP could be invoked to make a new type, or, in real LISP, we should
define a function (or, more efficiently, a macro) to manipulate that "structure".
In either case, in the machinery of the typing or of the simulation by macros, access
to sub-substructure goes away. Therefore the pattern matcher required by Hoare
is too expensive; direct access by named components is far superior.
If you don't like the names "car" and "cdr", then define a new type in typed LISP
or define a trivial macro in real LISP.
In the simplest cases of function calling in typed LISP,
you could think of the actual-parameter checker
as knowing about the body of the function and, as soon as it finds a type
match, firing the appropriate piece of the function.
Here are Hoare's examples in pseudo typed LISP.
In typed LISP the parameter list is a structure, rather than a tuple as in
real LISP.
let SEXPR be ATOM| DOTTED_PAIR
let DOTTED_PAIR be structure[first:SEXPR; second:SEXPR]
let ATOM be ID
car:SEXPR <= λ[[x:DOTTED_PAIR] first[x]]
cdr:SEXPR <= λ[[x:DOTTED_PAIR] second[x]]
car and cdr are undefined for atoms; or, if you wish:
car*:SEXPR <= λ[[x:SEXPR][atom[x] → LOSE; T → first[x]]]
Notice the body of the function is really part of the type
checking.
(It seems to me that "cases" is really just the old "select" of LISP)
atom:BOOLEAN <= λ[[x:SEXPR] cases x of id[x] → T;T → NIL]
equals:BOOLEAN <= λ[[l1:SEXPR; l2:SEXPR]
[atom[l1] → [atom[l2] → l1 = l2; T → NIL]
....
]]
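Here is the same sketch rendered in OCaml, used as a stand-in notation for these
examples (the constructor names and the choice of string atoms are assumptions of
the sketch, not part of the proposal):

(* SEXPR is ATOM | DOTTED_PAIR, as a variant type. *)
type sexpr =
  | Atom of string                  (* ATOM is ID *)
  | Cons of sexpr * sexpr           (* DOTTED_PAIR: first, second *)

(* car and cdr dispatch on the pair shape; atoms LOSE. *)
let car = function
  | Cons (first, _) -> first
  | Atom _ -> failwith "car: undefined for atoms"

let cdr = function
  | Cons (_, second) -> second
  | Atom _ -> failwith "cdr: undefined for atoms"

let atom = function Atom _ -> true | Cons _ -> false

(* equals: the body "fires" as the shapes are matched. *)
let rec equals l1 l2 =
  match l1, l2 with
  | Atom a, Atom b -> a = b
  | Cons (a, b), Cons (c, d) -> equals a c && equals b d
  | _, _ -> false

Note that the type structure, not a run-time pattern matcher in the body, does the work.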
Again the body of the function can be "fired" as the types are checked.
Perhaps you could think of all computation as just glorified type checking
or syntax analysis, just as ACTORS thinks of all data structures as programs,
but there seem to be good reasons for separating data structures from programs.
If nothing else, programs are more dynamic than data structures; separating
out the static parts of programs should make them more understandable.
Thinking about "2" as a program doesn't really help the intuition.
I tend to think of data structure extensions more as notational aids
than computational improvements. In that vein I think it is very important
to include I/O conventions for new data types. For example, in the prover
a clause is really a data-type; it has an input and output parser; and
god help anyone who has to see the underlying representation! Perhaps
I/O considerations are premature in a theoretical study, but they aren't
in designing a language.
pass 2, reconsidered.
It is good to be able to refer to substructure implicitly,
e.g. in deriv[sum[u;v]] → sum[deriv[u]; deriv[v]].
Type checking on entrance can have minimal cost.
First, I thought you were talking about data STRUCTURES rather
than data TYPES. The distinction is probably worth making.
Again I would say LISP has dotted pairs and lists as data structures
because they both satisfy my criteria: selectors, generators, I/O, ....
Dotted pairs are primitive and lists are defined over them.
The distinction could also be exemplified by a choice of languages for
implementation of Hoare's data types (structures).
For actual programming a slight extension
of ECL is better; for developing formal tools LISP is probably
better. Hoare's use of pattern matching to combine testing and selection
is a useful notation, which by the way has been around for a long time.
I think we can do better. Most of the mechanisms are in ECL.
It is a very good idea to be able to select without explicit names.
Seldom, if ever, are you interested in testing a structure without also
wanting to do something with the components if the test is successful.
One of the things which is disturbing is the implication in
the notation that a general pattern match is being used, whereas
you should really look at a value in the language as a structure:
a sequence of name-value pairs. The actual parameter list is a structure
consisting of formal parameters and types; each actual parameter is a structure.
The combined tester-selector is simply an extension of ECL's GENERIC,
which would test the structure and if successful, extract components.
(GENERIC came from BASEL, which came from ELF, which ...)
A pattern match to extract components of components, etc. would be
a violation of structured programming and layering, whatever they mean
(buzz, buzz). This answers another question about Hoare, namely his objection
to deep car-cdr chains. I claim that good (structured) LISP shouldn't have
deep car-cdr chains anyway. These chains appear for one of two reasons:
1. You are trying to simulate a data structure in LISP; the chains
are selecting components. Rewrite the program using subr calls, or,
better yet, macros to manipulate the components.
2. You are trying to look at sub-sub-components, as above. Rewrite
so as not to cross levels of data structures.
Obviously these can both be handled cleanly in a typed LISP, or ECL extension.
I should probably make a disclaimer here: I haven't really looked at ECL
since UCLA. Wegbreit's thesis was a clean but unimplemented description
of the language. Hopefully they haven't fucked it up; it was a nice
natural extension of the ideas of LISP-style language definition.
Anyway, here are pseudo-ECL definitions of Hoare's stuff.
some kind of mode definition:
SEXPR is ATOM or DTPR
DTPR is struct[car:SEXPR; cdr:SEXPR]
equal ← EXPR(x:SEXPR, y:SEXPR;BOOLEAN)
generic(x,y)
[ATOM,ATOM] => x=y;
[DTPR(a,b),DTPR(c,d)] => {equal(a,c) => equal(b,d);FALSE};
TRUE => FALSE ;
end
This is easier to read than nested case statements, and it does the selection and
testing all at once. There are obvious ways to suppress unnecessary selection
so as to increase efficiency of execution (which, by the way, should
be very efficient anyway). For example, DTPR(a,*) could mean: test the structure
and, if successful, assign the first component to "a", ignoring the second.
DTPR by itself could mean: test the associated structure without selection.
Selection without testing can always be done via component names; this should be
frowned upon.
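In the OCaml stand-in the DTPR(a,*) convention is exactly a wildcard pattern; a
minimal sketch against the sexpr type above (first_if_pair is a made-up name):

(* DTPR(a,*): test the structure, bind only the first component. *)
let first_if_pair = function
  | Cons (a, _) -> Some a           (* second component never selected *)
  | Atom _ -> None                  (* not a DTPR *)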
Here's differentiation
EXP is VAR or CONST or SUM or ...
SUM is struct[sum1:EXP; sum2:EXP]
etc.
diff ← EXPR(e:EXP, x:VAR;EXP)
generic(e)
[VAR] => {x=e => 1;0};
[CONST] => 0;
[SUM(u,v)] => sum(diff(u,x),diff(v,x));
...
end;
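The same dispatch in the OCaml stand-in, covering only the cases shown (int
constants and string variables are assumptions of the sketch):

type exp =
  | Var of string
  | Const of int
  | Sum of exp * exp                (* SUM is struct[sum1:EXP; sum2:EXP] *)

(* generic(e) becomes a match on the alternatives of EXP. *)
let rec diff e x =
  match e with
  | Var v -> if v = x then Const 1 else Const 0
  | Const _ -> Const 0
  | Sum (u, v) -> Sum (diff u x, diff v x)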
The car function could be:
car ← EXPR(x:DTPR;SEXPR) x.car        (select the car-th component of x)
car should not be defined for ATOM!
Notice here we use a named component. Seldom should names be needed,
but if so, they are available.
atom ← EXPR(x:SEXPR;BOOLEAN)
generic(x)
[ATOM] => TRUE;
TRUE => FALSE;
end;
I think this is superior notation. I just thought it up last night, so
it might have problems, but I doubt it.
Obviously it could be used in LISP; there is no hairy pattern matcher
here. Again it is important to notice that we should never, never look
any further than components, never at components of components. If
structured programming means anything, it must include structured data.
I really don't believe, a'la ACTORS, that data are just stupid
programs. Maybe the converse IS true: once the data structures are sorted out
correctly, there isn't any (stupid) program.
more examples
1. append. This one is different because it is not structural but sequential.
LIST is seq of SEXPR
append ← EXPR[x:LIST;y:LIST;LIST]
on(x)
[ε] => y;
[(a⊗b)] => list(a,append(b,y))
end
Notation: (a⊗b) means a is first, b is rest;
(a,b) means a is first, b is second.
Thus there should be a different programming construct for each different data structure,
if we really are using types or structures.
A sequence CAN be represented as a structure (just as the integers can), but
clarity should over-rule frugality of notation.
The essence of sequences is their emptiness; we will allow naming
of components on accessing, and if the sequence is one of structures we
can extract on that structure.
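In the OCaml stand-in the two shapes of "on" are the two shapes of a built-in
list: [] is ε and a :: b is (a⊗b), so append reads:

let rec append x y =
  match x with
  | [] -> y                         (* [ε] => y *)
  | a :: b -> a :: append b y       (* [(a⊗b)] *)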
2. the deadly polyadd
POLY is seq of TERM
TERM is struct[coef:NUM;expo:EXPO]
EXPO is seq of NUM
polyadd[p:POLY;q:POLY;POLY]
on(p,q)
[ε] => q
[*,ε] => p
[(p1:TERM(a,b)⊗c),(p2:TERM(d,e)⊗f)] => {b=e => {a=-d => polyadd(c,f);
                                                poly(term(a+d,b),polyadd(c,f))}
                                        b>e => poly(p1,polyadd(c,q));
                                        poly(p2,polyadd(p,f))}
end
I've been thinking about my proposed notation. A slight extension
works VERY nicely. I have written a "structured LISP", half
way between LISP and ECL. It has user-defined data structures
and program constructs which parallel the data definitions. This
allows the removal of explicit selectors for all but the primitive
operations (assuming proper use of data structures... it actually appears
that whenever you feel the urge to use a selector you really
should define a new data structure). The evaluator (written in the language)
is very transparent.. it even handles function closures rather than
just funargs.. the unify algorithm is equally simple.
all of unify ... data structure definitions and all subfunctions..is one page
the evaluator is three..1 for data, 2 for evaluators.
Functions turn out to become mostly type checkers, dispatching on the type to
sub-functions.
It appears that you CAN replace complicated programs operating on simple
data with simple programs operating on smart data structures.
I'm planning to think about more formal properties tonight (fucking with JRB today).
Since data structures are static, it should be easier to prove properties
here than trying to prove properties of complicated programs (which are dynamic).
I'd rather not advertise yet, since the ideas seem promising
but are very easy (after writing a book on LISP).
By the way I don't think d.s. should be directly recursive.
This concerns what I called structured LISP. It's half way between
LISP and EL1. The basic idea is to keep the LISP-style semantics but
map the language onto a richer class of data structures than dotted pairs.
The mapping is done as in EL1, taking the BNF equations to pointers,
rows and structures. I'll allow user-defined modes, with typed arguments
in functions defined over the modes. For example
Here's differentiation
The data definitions (or abstract syntax):
exp ::= var
::= const
::= sum
...
sum ::= struct[sum1:exp;sum2:exp]
the program (most of the syntax is irrelevant):
diff[e:exp; x:var]exp
generic(e)
[var] => {x=e => 1;0};
[const] => 0;
[sum(u,v)] => sum(diff(u,x),diff(v,x));
...
end;
generic is a programming construct to dispatch on data-types defined by
BNF equations with alternatives.
The type-checker may make assignments of local variables to values of components
of a data structure: thus sum(u,v). This is like Hoare, which is like ...∞.
Thus "sum" on the LHS of => is a predicate and selector; "sum" on the RHS of
=> is a constructor. The test expression may NOT go deeper than examination
and assignment of components of data structures; NEVER look at sub-components.
(structured programming?)
generic may also run on more than one arg. example:
sexpr ::= atom
::= dtpr
dtpr ::= struct[car:sexpr;cdr:sexpr]
equal[x:sexpr;y:sexpr]boolean
generic(x,y)
[atom,atom] => x=y;
[dtpr(a,b),dtpr(c,d)] => {equal(a,c) => equal(b,d);FALSE};
FALSE ;
end
The car function could be:
car[x:dtpr]sexpr
x.car        (select the car-th component of x)
car should not be defined for ATOM!
Notice here we use a named component. Seldom should names be needed,
but if so, they are available.
atom[x:sexpr]boolean
generic(x)
[atom] => TRUE;
FALSE;
end;
There is another programming construct to handle data definitions on
sequences. "on" runs on sequences: ε matches the empty sequence;
(x⊗y) means x is first and y is rest;
1. the deadly polyadd
poly ::= seq[term]
term ::= struct[coef:number;expo:expo]
expo ::= seq[num]
polyadd[p:poly;q:poly]poly
on(p,q)
[ε] => q
[*,ε] => p
[(p1:term(a,b)⊗c),(p2:term(d,e)⊗f)] => {b=e => {a=-d => polyadd(c,f);
                                                poly(term(a+d,b),polyadd(c,f))}
                                        b>e => poly(p1,polyadd(c,q));
                                        poly(p2,polyadd(p,f))}
end;
I am not too enchanted with the "on" notation, but it suffices.
may 7: another stab at "on"
|polyadd[p:poly;q:poly]poly
| on(p,q;
[ε] => q
[*,ε] => p
[(p1:term(a,b)⊗c),(p2:term(d,e)⊗f)] => {b=e => {a=-d => polyadd(c,f);
                                                poly(term(a+d,b),polyadd(c,f))}
                                        b>e => poly(p1,polyadd(c,q));
                                        poly(p2,polyadd(p,f))}
end;
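One hedged reading of polyadd in the OCaml stand-in, with POLY as a list of
terms; int coefficients and lexicographic comparison of exponent vectors are
assumptions of the sketch:

type term = { coef : int; expo : int list }

let rec polyadd p q =
  match p, q with
  | [], q -> q                                      (* [ε] => q *)
  | p, [] -> p                                      (* [*,ε] => p *)
  | ({coef = a; expo = b} as p1) :: c,
    ({coef = d; expo = e} as p2) :: f ->
      if b = e then
        (if a = -d then polyadd c f                 (* like terms cancel *)
         else {coef = a + d; expo = b} :: polyadd c f)
      else if b > e then p1 :: polyadd c q          (* p's term leads *)
      else p2 :: polyadd p f                        (* q's term leads *)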
The implementation of such a language should be quite efficient:
"[ ]" does not require an expensive pattern match. It only matches types
and perhaps associates immediate components with variables. Obviously
sequences and structures can be efficiently stored. The calling sequence
is efficient: essentially a dispatch on type.
Here are what appear to be immediate benefits:
1. richer data structures than dotted-pairs; user defined d. structures.
2. combination of selectors and predicates into a simple matcher.
3. constructs of language parallel the data typing: generic and on.
Makes for simple and clean programs.
4. excellent readability. The evaluator for the language consists of one page
of data structure definitions and 2 pages of program.
5. efficient data and program
6. formal properties, though unexplored, look promising. Basically this
approach replaces simple data structures and complicated programs with
complicated data and simple programs. Most programs simply become type-
checkers, dispatching on types to other simple programs. This should
have some benefit in forcing properties which normally have to be proved
about a program back into AXIOMS about the chosen data structures.
And relationships involving data structures (static) should be easier
to attack than those involving programs (dynamic).
Most of the ideas have been around a long time; what seems novel is their
combination and the extended use of generic. This formulation is only about
two days old; there may be holes. But it does seem natural and interesting.
I would be interested in your comments.
j allen
sodden thoughts:
1. why not attach proof rules or hints to d.s., as well as I/O etc. specs,
in the DDB?
2. why not type "output"
e.g.
clause ::= struct[test:form;conseq:form] where the result of test is of type boolean.
no, this kind of crap should be covered by the "semantics" part of pseudo-sdio.
3. note on construction of new objects: usually better than cons, since we need
only get a canned d.s. of appropriate size and plug in components, rather
than a sequence of conses.
4. storage of sequences is at worst like cons, at best a block of cells.
Access of the i-th element is complicated, but usually better than a cddr chain.
(like stack "blip"s)
5. 4, above, also relates to allocation of sequences in general. If you
don't know the length, you have to cons it and pay through the ass.
6. the efficiencies of d.s. manipulation should be proportional to the
obvious efficiencies of function calls... just as we can express
all λ-functions as single-variable lambdas (with obvious loss of clarity
and efficiency), we can express what are basically sequence operations
as cons(cons(...)), which should also require that you pay! Thus the
operations which make eval go should be the same as those which make
progs in the language go. I.e. sequence ops should be the rule; cons-ops
should be the exception.
It seems then that the primitives need only be seq and struct; e.g.
stack goes away, since it is a special case of seq (NOT the other way around!!).
We should not allow easy programming of operations less primitive than those
"natural" to the implementation of the language. Thus cons should be more
expensive than seq.
7. should eval always bring back a struct[value:form;st:environ]?
It has to for closures.
8. on seq ops: what about f-i acting on x-i and y-i for all i?
f on x-i and y-i is easy.
9. related to 6, in notes somewhere but remember.
reasons for car-cdr chains: 1. simulate d.s.; 2. cross "levels" of d.s.
10. concerning displeasure about cons: how often do you "need" to stack single
elements rather than blocks?
11. TA-DA!! HOW TO HANDLE FRAGMENTATION: result stack(s). probably as aux arg
to "on".
12. HOW TO DESIGN A LANGUAGE:
   while L(n) ≠ L(n+1) do
      x ← constructs to define the interpreter in L(n)
      y ← constructs to make x usable for the user
      restructure x to take advantage of y
      z ← constructs to make an efficient implementation of y
      u ← constructs to make efficient user programs
      restructure z to take advantage of u
   end;
This will reach a "fixed point" in two spaces: a minimal language for definition
of the evaluator in the language, and a minimal language for efficient implementation
on a "reasonable" machine.
benefits: 1. the usual: understand the language ↔ understand eval.
2. things efficient in the "real" implementation are efficient in user progs.
13. should store "semantics" with the d.s. def., e.g. a d.s. which is a palindrome.
i.e. "semantics" (as usual) covers context-sensitive aspects as well as "real" semantics.
BE CAREFUL about storing real semantics: we don't want typing
to depend on the programming environment; something is or is not brand X -- ALWAYS!
So things like "palindrome of seq of length n" are o.k.; things like
"... and the second element of the d.s. is relatively prime to y ..." are not.
14. actually, tying program constructs to d.s. is not new... LISP does it
implicitly: simple recursion on dtprs is it.
15. adding d.s. to a language must be done carefully, not in vacuo; c.f.
adding recursion to FORTRAN. So it must be integrated into the language: recursion
for traversing d.s. which are dtprs; generic and on for traversing structures
and seqs respectively.
16. "on" attempts to get synthesized and inherited attributes a'la Knuth.
My discomfort goes deeper than just a bad frame for TSL. I believe
that it is just one indication of a mismatch between a programming language
and its application. By that I mean that programming languages are
designed for writing programs not for specifying programs. The tasks
are quite different. To write a program requires only a language of sufficient
power such that I can map my algorithm onto a running piece of code.
Specifying a program is a much more difficult task, corresponding to
the mental gymnastics which the programmer performs as he goes
mentally from an intuition to a program. Clearly the basic problem
in APG is program specification. When we sit down to express an idea as a
program the first thing we do is decide what the representation of our
intuitive domain is to be. In most cases of programming, the input and
output of the program is %2not%* in the user's domain, but is a representation
of it. (Just as the input to %3eval%* is not a LISP expression but is a representation
of it.) How then can we begin to examine representation of information?
Certainly we could say "It's all zeros and ones!", but that is pointless.
I believe it is equally pointless to force immediate representation of the user's
thoughts into the machine representation at the higher level of machine words,
or fixed predefined data types (like INTEGER, FLOATING, ...). Frequently
the whole programming problem is representational; once the proper data
structure has been fixed the programming is trivial; and even worse, if
a poor representation is chosen the programming task becomes monstrous.
So representations are "a dime a dozen"; what is needed is a natural representation.
Assuming then, for the moment, that the philosophy "write me a program to
do ..." is too cavalier,
how can we begin to allow the user to represent %2his%* information
in %2his%* notation such that %2our%* program generator can make sense of it
without requiring too much over specification.
I claim there usually is a natural distinction between data structures and program.
Thinking of "2" as a program does nothing for my intuition.
What then are data structures in a very pragmatic sense? They are, first,
static quantities: once an object is of type X it is of type X forever; if
2 is an INT in program A, it is an INT in program B. How do we
most usually specify static quantities? BNF equations are the standard technique.
So the user should be able to specify his data structures in BNF? A little
more is needed to be satisfactory. There are properties which are static
but are not (naturally, at least) specified in context free notation.
For example an integer is or is not a member of the fibonacci sequence;
a string is or is not a palindrome. These kinds of extra conditions specify
context sensitive aspects and should be allowed. What should be disallowed
is specification of "real" semantics in data-structure description.
It is not unreasonable to claim that if a prospective user cannot
specify his data structures precisely, then he has no hope of specifying a program
and we are better off being done with him as soon as possible.
Certainly the output from the program should be equally susceptible to data-structure
analysis.
Before the cry goes up, "You are turning computation into syntax directed
input and output!", let me say "yes and no". A great deal of computation
IS sdio (cf. TSL); the more programming tasks which can be represented naturally
as such the better. SDIO is a natural way to do APG. To answer "no", we simply
point out that we are only specifying the I/O data structures precisely as
augmented BNF. The program is the transformation from I to O. The program may
be quite complex. The point here is to simplify the programming as much as
possible (without distortion) by careful specification of the data structures.
Consider the infamous "fibonacci example" of Balzer: "The goal is, given N,
to produce the Nth Fibonnachi(sic) number."
The goal is poorly stated: what is really asked for is the Nth element of a
sequence. The first problem is then to specify the sequence; then it is
trivial to extract the Nth element. The distinction seems trite, but the
resulting specification is strikingly different from the usual.
.BEGIN NOFILL;
So Fibonacci is NOT a program it is a sequence; it is a data structure.
fib ::= seq[int] such that if |fib| = 1 then fib = (0)
                           if |fib| = 2 then fib = (0,1)
                           o.w. if 2 < |fib| = n then ∀i: 2<i≤n, fib[i] = fib[i-1] + fib[i-2]
How do you calculate the Nth element of fib? Well, how do you calculate the
Nth integer? You use the successor function.
So Suc is a generic function:
If x is integer then Suc(x) is x + 1;
if x is real then Suc(x) is x + 1.0;
if x is a fib then Suc(x) is: if |x| = 1 then (0,1)
else fib(x,x%4|x|%* + x%4|x|-1%*)
A. It seems easy to generate Suc for the data structure definition of fib.
B. It is easy to generate a prog to find the Nth fibonacci number from Suc
C. More important, it is easy to translate this program, indeed a whole class of programs,
into "efficient" loop progs.
TSL is even simpler, being a straight SDIO problem:
.begin group
INPUT
prefix ::= var%4I%*
::= f_args%4I%*
f_args%4I%* ::= struct[fn%4I%*:id;args%4I%*:terms%4I%*]
terms%4I%* ::= seq[prefix]
.end
.begin group
OUTPUT
postfix ::= var%4O%*
::= f_args%4O%*
f_args%4O%* ::= struct[args%4O%*:terms%4O%*;fn%4O%*:id]
terms%4O%* ::= seq[postfix]
.END
.BEGIN GROUP
TRANSFORMATION
var%4O%* = var%4I%*
fn%4O%* = fn%4I%*
args%4O%* = args%4I%*
.END
.BEGIN GROUP;
PROGRAM FOR TRANSFORMATION
pre_to_post[e:prefix]postfix
generic(e)
[var%4I%*] =>e
[f_args%4I%*(u,v)] => f_args%4O%*(arg_list(v),u)
end;
arg_list[a:terms%4I%*]terms%4O%*
on(a)
[ε] => ε
[(u⊗v)] => terms%4O%*(pre_to_post(u),arg_list(v))
end;
(** N.B. there is a better notation for this kind of "on", but at least this will work)
.END
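The transformation in the OCaml stand-in; List.map plays arg_list's role of
running "on" down the argument sequence (the type and constructor names are
assumptions of the sketch):

type prefix = VarI of string | FArgsI of string * prefix list
type postfix = VarO of string | FArgsO of postfix list * string

let rec pre_to_post = function
  | VarI v -> VarO v                          (* var_O = var_I *)
  | FArgsI (fn, args) ->                      (* fn moves to the rear *)
      FArgsO (List.map pre_to_post args, fn)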
MONKEY BUSINESS
ontop ::= seq[move]
move ::= standon
::= stepup
standon ::= struct[m:animal;l:loc] s.t. at(m,l)
stepup ::= struct[m:animal;o1:obj;o2:obj] s.t. on(m,o1) ∧ on(o2,o1)
conditions on ontop:
burstall tree sort
________________________________
a.d.s.
list ::= seq[item]
item ::= atom
tree ::= node
::= tip
::= niltree
node ::= struct[trL:tree;it:item;trR:tree]
tip ::= struct[it:item]
niltree ::= struct[]
l_or_t ::= list
::= tree
i_l_or_t ::= list
::= tree
::= item
___________________________________
functions
totree[i:item;t:tree]tree
generic(t)
[niltree] => tip(i)
[tip(i1)] => { i1≤ i → node(t,i,tip(i));node(tip(i),i1,t)}
[node(t1,i1,t2)] => {i1≤i → node(t1,i1,totree(i,t2));node(totree(i,t1),i1,t2)}
end;
maketree[is:list]tree
on(is;*;niltree(),totree)
flatten[t:tree]list
generic(t)
[niltree] => ε
[tip(i)] => list(i)
[node(t1,*,t2)] => list(flatten(t1),flatten(t2))
end;
sort[is:list] list
flatten(maketree(is));
≤[x:i_l_or_t; y:i_l_or_t] boolean
generic(x,y)
[item,item] => less_eqp(x,y)
[item,list] => {on(y;TRUE,λ[[u,v][not(x≤u)→FALSE;v]])}
[list,list] => {on(x;TRUE,λ[[u,v][not(u≤y)→FALSE;v]])}
[item,tree] => {generic(y)
   [niltree] => TRUE
   [tip(i)] => x≤i
   [node(t1,*,t2)] => x≤t1 ∧ x≤t2}
[tree,tree] => {generic(x)
   [niltree] => TRUE
   [tip(i)] => i≤y
   [node(t1,*,t2)] => t1≤y ∧ t2≤y}
end;
****** the new "on" is screwed ↓↓↓ here: (i⊗is) => i≤is lossage
ord[x:l_or_t] boolean
generic(x)
[list] => {on(x)
[ε] => TRUE
[(i⊗is)] => i≤is ∧ ord(is)}
[tree] => {generic(x)
[niltree] => TRUE
[tip] => TRUE
[node(t1,i,t2)] => ord(t1) ∧ ord(t2) ∧ t1≤i ∧ i≤t2}
end;
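The tree sort in the OCaml stand-in, under the reading that tips carry the items
and node labels only steer insertion (int items assumed):

type tree =
  | Niltree
  | Tip of int
  | Node of tree * int * tree       (* node[trL:tree; it:item; trR:tree] *)

let rec totree i t =
  match t with
  | Niltree -> Tip i
  | Tip i1 ->
      if i1 <= i then Node (t, i, Tip i) else Node (Tip i, i1, t)
  | Node (t1, i1, t2) ->
      if i1 <= i then Node (t1, i1, totree i t2)
      else Node (totree i t1, i1, t2)

(* maketree: on(is;*;niltree(),totree), i.e. fold totree over the list. *)
let maketree is = List.fold_right totree is Niltree

let rec flatten = function
  | Niltree -> []
  | Tip i -> [i]                    (* a tip yields its item *)
  | Node (t1, _, t2) -> flatten t1 @ flatten t2

let sort is = flatten (maketree is)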
Try a proof of ord(totree(i,t)) for ordered t.
ord(totree(i,t)) reduces (outside-in) to:
generic(t)
[niltree] => ord(tip(i))
[tip(i1)] => ord({ i1≤ i → node(t,i,tip(i));node(tip(i),i1,t)})
[node(t1,i1,t2)] => ord({i1≤i → node(t1,i1,totree(i,t2));node(totree(i,t1),i1,t2)})
end;
i: ord(tip(i)) ==> TRUE
ii: a: assume i1≤i;
look at ord(node(t,i,tip(i)))
? ord(tip(i1)) ∧ ord(tip(i)) ∧ tip(i1)≤i ∧ i≤tip(i) ?   (n.b. t ≡ tip(i1))
yes to all four conjuncts.
b: sim.
iii: a: yawn
AXIOMS AND R. OF I.
a'la Burstall, for structures
generic(c(a,b, ...,d))
[c(x,y, ...,z)] => α(x,y, ...,z)
________________________________________
α(a,b, ...,d)
for sequences
on(ε)
[ε] => α( .... )
____________________
α( ....)
on( (x1,x2, ...,xn) )
[(y1⊗y2)] => α(y1,y2, ....)
______________________________
α(x1,(x2, ...,xn), ....)
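One hedged way to set the three rules in LaTeX (α's extra arguments abbreviated
as in the original):

\[
\frac{\mathtt{generic}(c(a,b,\ldots,d)) \quad [\,c(x,y,\ldots,z)\,] \Rightarrow \alpha(x,y,\ldots,z)}
     {\alpha(a,b,\ldots,d)}
\]
\[
\frac{\mathtt{on}(\varepsilon) \quad [\,\varepsilon\,] \Rightarrow \alpha(\ldots)}
     {\alpha(\ldots)}
\qquad
\frac{\mathtt{on}((x_1,x_2,\ldots,x_n)) \quad [\,(y_1 \otimes y_2)\,] \Rightarrow \alpha(y_1,y_2,\ldots)}
     {\alpha(x_1,(x_2,\ldots,x_n),\ldots)}
\]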
needs more
outline
what's wrong with LISP?
basically too few modes (only atoms, dtprs, and lists)
marginal on closure..not clear how important this is.
want modes
best for clarity is AMBIT
modes ≡ shapes, predeclared
prog ≡ match shape, test, modify
want to do for usual lang
prog
d.s. dfn.
algorithms
end
like LISP semantics
want interpreter in lang
also want efficient implementation expressible naturally in lang.
(i.e. better than alist)
look at the ECL definition, which is an extension of LISP to struct and seq rather than lists.
map BNF to d.s.
therefore use as d.s. dfn AND prog rep as d.s.
programming constructs
full typing of args (modulo covering)
programming should be heavy on type manipulation
therefore generic
therefore on
implementation needs better than "on".
it attempts to handle synthesized and inherited attributes
notes:
easy map to FOL for correctness proofs.
perhaps better for APG since natural d.s.=> smaller prog
and makes you really think NATURALLY about problem, rather than "gee, write me a progr..."
conclusions
easier to write programs
constant "visual mode checking".
smaller progs, easier to understand
should be more efficient, type checking by compiler easy
things to do
"on" is fucked up
seq implementation is marginal, result stack may win
what about assignments
how good is full closure; should eval always return struct[val;environ]?
moron "on"
try on(x, ...,s; y,f; z,g):
g(f(x1, ...,s1,y),
 g(f(x2, ...,s2, f(x1, ...,s1,y)),
  ....
   g(f(xn, ...,sn, f( ... f(x2, ...,s2, f(x1, ...,s1,y)) ...)), z) ...)
i.e., writing r1 = f(x1,...,s1,y) and ri = f(xi,...,si,r(i-1)), the value is
g(r1, g(r2, ... g(rn, z) ...)): f gets applied L→R; g gets applied R→L.
y and z are the terminating values.
abbreviations:
i. on(x;*;y,f) is on(x;*,λ[[x,y]x]; y,f)
   i.e. f(x1, f(x2, ... f(xn,y) ...))
ii. on(x;y,f) is on(x;y,f;*) is on(x;y,f;*,λ[[x,y]y])
   i.e. f(xn, f( ... f(x2, f(x1,y)) ...))
examples.
_____________________________
append[x:list;y:list]list
on(x;*;y,cons);
means cons(x1, cons(x2, ... cons(xn,y) ...))
equal_as_seq[x:list;y:list]boolean
on(x,y;TRUE,λ[[x,y,z][not(eq(x,y))→ FALSE;TRUE=>z]]);   * "→" means exit with value
list_of_bool[x:list;y:list]list
on(x,y;TRUE, λ[[x;y;z]eq(x,y)];NIL,cons );
member[x:atom;y:list]boolean
on(y;FALSE,λ[[u,v][eq(u,x)→ TRUE;TRUE => v]])
and now for, ta-da:
reverse[x:list] list
on(x;NIL,cons)
i.e.:
reverse[x:list] list
on(x;NIL,cons;*,λ[[x,y] y])
example: member(A,(B C D)) is f(D, f(C, f(B, FALSE)))
=f(D,f(C,FALSE))
=f(D,FALSE)
=FALSE
example: member(A,(A C D)) is f(D, f(C, f(A, FALSE)))
= TRUE (immediately)
append( (A B C), (D E)) is cons(A, cons(B, cons(C,(D E))))
l_o_b((A B C),(A D E)) is
cons(f(A,A,TRUE), cons(f(B,D,f(A,A,TRUE)), cons(f(C,E,f(B,D,f(A,A,TRUE))),NIL)))
= cons(TRUE, cons(FALSE, cons(FALSE,NIL)))
= (TRUE FALSE FALSE)
N.B.!!!! Actual computation is much more efficient: repetition of computation is
NOT done!!!
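In the OCaml stand-in the two abbreviations of "on" are the familiar folds, and
"→ exit with value" shows up as an early exit; a modern reading, not part of the
original design:

(* i:  on(x;*;y,f)  is  f(x1, f(x2, ... f(xn,y)))  -- a right fold.  *)
let on_i f x y = List.fold_right f x y

(* ii: on(x;y,f)    is  f(xn, ... f(x2, f(x1,y)))  -- a left fold.   *)
let on_ii f x y = List.fold_left (fun acc e -> f e acc) y x

let append x y = on_i (fun a b -> a :: b) x y      (* on(x;*;y,cons) *)
let reverse x = on_ii (fun a b -> a :: b) x []     (* on(x;NIL,cons) *)

(* member, with "→" as an early exit via an exception. *)
exception Found
let member a y =
  try on_ii (fun u v -> if u = a then raise Found else v) y false
  with Found -> true

The sharing noted above falls out for free: each r_i is computed once and passed along.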